In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.
Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.
When using probability theory to analyze order statistics from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.
Notation and examples
For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are
- 6, 9, 3, 7,
the order statistics would be denoted
x_{(1)} = 3, \quad x_{(2)} = 6, \quad x_{(3)} = 7, \quad x_{(4)} = 9,
where the subscript (i) enclosed in parentheses indicates the i-th order statistic of the sample.
The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,
X_{(1)} = \min\{X_1, \ldots, X_n\},
where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.
Similarly, for a sample of size n, the n-th order statistic (or largest order statistic) is the maximum, that is,
X_{(n)} = \max\{X_1, \ldots, X_n\}.
The sample range is the difference between the maximum and minimum. It is a function of the order statistics:
\operatorname{Range}\{X_1, \ldots, X_n\} = X_{(n)} - X_{(1)}.
A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.
The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is X_{(m+1)} and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, X_{(m)} and X_{(m+1)}, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
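To make the notation concrete, the following short Python sketch (an illustration added here, not part of the original text) computes the order statistics, minimum, maximum, range, and median of the small sample above by sorting:

sample = [6, 9, 3, 7]

order_stats = sorted(sample)            # [3, 6, 7, 9] = x_(1), x_(2), x_(3), x_(4)
x_min, x_max = order_stats[0], order_stats[-1]
sample_range = x_max - x_min            # x_(4) - x_(1) = 6

n = len(order_stats)
if n % 2 == 1:                          # odd n: the median is the middle order statistic
    median = order_stats[n // 2]
else:                                   # even n: average of the two middle order statistics
    median = (order_stats[n // 2 - 1] + order_stats[n // 2]) / 2

print(order_stats, x_min, x_max, sample_range, median)   # [3, 6, 7, 9] 3 9 6 6.5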
Probabilistic analysis
Given any random variables X_1, X_2, \ldots, X_n, the order statistics X_{(1)}, X_{(2)}, \ldots, X_{(n)} are also random variables, defined by sorting the values (realizations) of X_1, \ldots, X_n in increasing order.
When the random variables X_1, X_2, \ldots, X_n form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X_1, \ldots, X_n can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.
From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.
Cumulative distribution function of order statistics
For a random sample as above, with cumulative distribution F_X(x), the order statistics for that sample have cumulative distributions as follows (where r specifies which order statistic):
F_{X_{(r)}}(x) = \sum_{j=r}^{n} \binom{n}{j} [F_X(x)]^j [1 - F_X(x)]^{n-j} .
The proof of this formula is pure combinatorics: for the r-th order statistic to be \leq x, the number of samples that are \leq x has to be between r and n. In the case that X_{(j)} is the largest order statistic that is \leq x, there have to be j samples \leq x (each with an independent probability of F_X(x)) and n - j samples > x (each with an independent probability of 1 - F_X(x)). Finally, there are \binom{n}{j} different ways of choosing which of the n samples are of the \leq x kind.
The corresponding probability density function may be derived from this result, and is found to be
f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!} \, f_X(x) \, [F_X(x)]^{r-1} [1 - F_X(x)]^{n-r} .
Moreover, there are two special cases, which have CDFs that are easy to compute:
F_{X_{(n)}}(x) = P(\max\{X_1, \ldots, X_n\} \leq x) = [F_X(x)]^n ,
F_{X_{(1)}}(x) = P(\min\{X_1, \ldots, X_n\} \leq x) = 1 - [1 - F_X(x)]^n ,
which can be derived by careful consideration of probabilities.
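As an informal check of the CDF formula above (a sketch added for illustration, not part of the original article; the standard normal parent distribution and the parameter values are arbitrary choices), the exact sum can be compared with a Monte Carlo estimate:

from math import comb
import random
from statistics import NormalDist

n, r, x = 5, 3, 0.5           # sample size, order-statistic index, evaluation point (arbitrary)
F = NormalDist().cdf(x)       # F_X(x) for a standard normal parent distribution

# Exact CDF of the r-th order statistic: sum_{j=r}^{n} C(n, j) F^j (1 - F)^(n - j)
exact = sum(comb(n, j) * F**j * (1 - F)**(n - j) for j in range(r, n + 1))

# Monte Carlo estimate of P(X_(r) <= x)
trials = 100_000
hits = sum(sorted(random.gauss(0, 1) for _ in range(n))[r - 1] <= x for _ in range(trials))

print(exact, hits / trials)   # the two numbers should agree to roughly two decimal places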
Probability distributions of order statistics
Order statistics sampled from a uniform distribution
In this section we show that the order statistics of the uniform distribution on the
unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.
We assume throughout this section that X_1, X_2, \ldots, X_n is a random sample drawn from a continuous distribution with cdf F_X. Denoting U_i = F_X(X_i) we obtain the corresponding random sample U_1, \ldots, U_n from the standard uniform distribution. Note that the order statistics also satisfy U_{(i)} = F_X(X_{(i)}).
The probability density function of the order statistic U_{(k)} is equal to
f_{U_{(k)}}(u) = \frac{n!}{(k-1)!\,(n-k)!} \, u^{k-1} (1-u)^{n-k} ,
that is, the k-th order statistic of the uniform distribution is a beta-distributed random variable,
U_{(k)} \sim \operatorname{Beta}(k, n+1-k) .
The proof of these statements is as follows. For U_{(k)} to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already O(du^2), so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals (0, u), (u, u + du) and (u + du, 1) respectively. This equals (refer to multinomial distribution for details)
\frac{n!}{(k-1)!\,1!\,(n-k)!} \, u^{k-1} \cdot du \cdot (1-u)^{n-k} + O(du^2),
and the result follows.
The mean of this distribution is k / ( n + 1).
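As an informal check (a simulation sketch added for illustration; the parameter values are arbitrary), one can simulate U_{(k)} and compare its sample mean with k/(n + 1):

import random

n, k = 10, 3                       # arbitrary sample size and order-statistic index
trials = 200_000

# Simulate the k-th order statistic of n i.i.d. Uniform(0, 1) variables
total = 0.0
for _ in range(trials):
    total += sorted(random.random() for _ in range(n))[k - 1]

print(total / trials)              # sample mean of U_(k)
print(k / (n + 1))                 # mean of Beta(k, n + 1 - k): 3/11 = 0.2727...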
The joint distribution of the order statistics of the uniform distribution
Similarly, for i < j, the joint probability density function of the two order statistics U_{(i)} < U_{(j)} can be shown to be
f_{U_{(i)}, U_{(j)}}(u, v) = n! \, \frac{u^{i-1}}{(i-1)!} \, \frac{(v-u)^{j-1-i}}{(j-1-i)!} \, \frac{(1-v)^{n-j}}{(n-j)!} ,
which is (up to terms of higher order than O(du \, dv)) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0, u), (u, u + du), (u + du, v), (v, v + dv) and (v + dv, 1), respectively.
One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:
f_{U_{(1)}, U_{(2)}, \ldots, U_{(n)}}(u_1, u_2, \ldots, u_n) = n! \quad \text{for } 0 \leq u_1 \leq u_2 \leq \cdots \leq u_n \leq 1 .
One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region 0 < u_1 < u_2 < \cdots < u_n < 1.
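For concreteness, consider the case n = 2 (a small worked example added for illustration, not part of the original text): the joint density of the two uniform order statistics is
f_{U_{(1)}, U_{(2)}}(u_1, u_2) = 2! = 2 \quad \text{for } 0 \leq u_1 \leq u_2 \leq 1 ,
and the triangle 0 \leq u_1 \leq u_2 \leq 1 has area 1/2 = 1/2!, so this constant density integrates to 1.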
The mutual information between two uniform order statistics is given by
I(U_{(r)}; U_{(m)}) = T_{m-1} + T_{n-r} - T_{m-r+1} - T_n ,
where T_k = \log(k!) - k H_k and H_k is the k-th harmonic number.
Application: Non-parametric density estimation
Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator.
Suppose we want to estimate the density f_X at the point x^*. Consider the random variables Y_i = |X_i - x^*|, which are i.i.d. with density function g_Y(y) = f_X(y + x^*) + f_X(x^* - y). In particular, f_X(x^*) = \frac{g_Y(0)}{2}.
The expected value of the first order statistic Y_{(1)} given a sample of N total observations is
E(Y_{(1)}) = \frac{1}{(N+1) g(0)} + \frac{1}{(N+1)(N+2)} \int_{0}^{1} Q''(z) \delta_{N+1}(z) \, dz ,
where Q is the quantile function associated with the distribution g_{Y}, and \delta_N(z) = (N+1)(1-z)^N. This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm:
Input: A sample of N observations. \{x_\ell\}_{\ell=1}^M points of density evaluation. Tuning parameter a \in (0,1) (usually 1/3).
Output: \{\hat{f}_\ell\}_{\ell=1}^M estimated density at the points of evaluation.
1: Set m_N = \operatorname{round}(N^{1-a})
2: Set s_N = \frac{N}{m_N}
3: Create an s_N \times m_N matrix M_{ij} which holds m_N subsets with s_N observations each.
4: Create a vector \hat{f} to hold the density evaluations.
5: for \ell = 1 \to M do
6:   for k = 1 \to m_N do
7:     Find the nearest distance d_{\ell k} to the current point x_\ell within the k-th subset
8:   end for
9:   Compute the subset average of distances to x_\ell: d_\ell = \sum_{k=1}^{m_N} \frac{d_{\ell k}}{m_N}
10:  Compute the density estimate at x_\ell: \hat{f}_\ell = \frac{1}{2 (1 + s_N) d_\ell}
11: end for
12: return \hat{f}
In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR-based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.
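As a sketch of how the algorithm above might be implemented (the function name, the striding partition into a list of subsets rather than a matrix, and the test data are illustrative choices of this sketch, not prescribed by the original), consider the following Python translation:

import random

def order_statistic_density_estimate(data, eval_points, a=1/3):
    # Sketch of the first-order-statistic density estimator described above.
    N = len(data)
    m = max(1, round(N ** (1 - a)))       # number of subsets, m_N
    s = N / m                             # (average) subset size, s_N
    shuffled = data[:]                    # random partition into m subsets
    random.shuffle(shuffled)
    subsets = [shuffled[k::m] for k in range(m)]

    estimates = []
    for x_star in eval_points:
        # nearest distance to x_star within each subset, i.e. the first order
        # statistic of |X_i - x_star| over that subset
        nearest = [min(abs(x - x_star) for x in subset) for subset in subsets]
        d = sum(nearest) / m              # average of the subset nearest distances
        estimates.append(1.0 / (2.0 * (1.0 + s) * d))
    return estimates

# Usage sketch: estimate a standard normal density at a few points
data = [random.gauss(0, 1) for _ in range(10_000)]
print(order_statistic_density_estimate(data, [0.0, 1.0, 2.0]))
# true values are roughly 0.399, 0.242, 0.054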
Dealing with discrete variables
Suppose
X_1,X_2,\ldots,X_n are i.i.d. random variables from a discrete distribution with cumulative distribution function
F(x) and probability mass function
f(x). To find the probabilities of the
k^\text{th} order statistics, three values are first needed, namely
p_1 = P(X < x) = F(x) - f(x), \quad p_2 = P(X = x) = f(x), \quad \text{and} \quad p_3 = P(X > x) = 1 - F(x).
The cumulative distribution function of the k^\text{th} order statistic can be computed by noting that
\begin{align}
P(X_{(k)}\leq x)& =P(\text{there are at least }k\text{ observations less than or equal to }x) ,\\
& =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\
& =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} .
\end{align}
Similarly, P(X_{(k)} < x) is given by
\begin{align}
P(X_{(k)}< x)& =P(\text{there are at least }k\text{ observations less than }x) ,\\
& =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\
& =\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} .
\end{align}
Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say
\begin{align}
P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}< x) ,\\
&=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\
&=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right).
\end{align}
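As an illustration (a sketch added here; the die example and the function names are not from the original), the last formula can be evaluated directly, for instance for the median of three rolls of a fair six-sided die:

from math import comb

def pmf_order_statistic(k, n, x, F, f):
    # P(X_(k) = x) for i.i.d. discrete samples with cdf F and pmf f
    Fx, fx = F(x), f(x)
    return sum(comb(n, j) * ((1 - Fx)**j * Fx**(n - j)
                             - (1 - Fx + fx)**j * (Fx - fx)**(n - j))
               for j in range(0, n - k + 1))

# Fair six-sided die: F(x) = x/6, f(x) = 1/6 for x = 1, ..., 6
F = lambda x: x / 6
f = lambda x: 1 / 6

# Distribution of the median (k = 2) of n = 3 rolls; the probabilities sum to 1
print([round(pmf_order_statistic(2, 3, x, F, f), 4) for x in range(1, 7)])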
Computing order statistics
The problem of computing the k-th smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
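For illustration (a sketch added here, not from the original article), the k-th smallest element can be obtained in expected linear time with a quickselect-style routine, or all order statistics at once by sorting:

import random

def kth_smallest(values, k):
    # Quickselect sketch: return the k-th smallest element (1-indexed) in expected O(n) time
    pivot = random.choice(values)
    below = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    if k <= len(below):
        return kth_smallest(below, k)
    if k <= len(below) + len(equal):
        return pivot
    return kth_smallest([v for v in values if v > pivot], k - len(below) - len(equal))

data = [6, 9, 3, 7]
print(kth_smallest(data, 2))    # 6, the second order statistic
print(sorted(data))             # all order statistics via an O(n log n) sort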
Applications
Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, and actuarial risk. There is an extensive literature devoted to studies on applications of order statistics in these fields.
For example, a recent application in actuarial risk provides some weighted premium principles in terms of record claims and k-th record claims.
See also
- Rankit
- Box plot
- BRS-inequality
- Concomitant (statistics)
- Fisher–Tippett distribution
- Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables
- Bernstein polynomial
- L-estimator – linear combinations of order statistics
- Rank-size distribution
- Selection algorithm